
    EvoSplit: An evolutionary approach to split a multi-label data set into disjoint subsets

    This paper presents a new evolutionary approach, EvoSplit, for the distribution of multi-label data sets into disjoint subsets for supervised machine learning. Currently, data set providers either divide a data set randomly or use iterative stratification, a method that aims to maintain the label (or label-pair) distribution of the original data set across the different subsets. With the same aim, this paper first introduces a single-objective evolutionary approach that tries to obtain a split maximizing the similarity between each of those distributions independently. Second, a new multi-objective evolutionary algorithm is presented that maximizes the similarity considering both distributions (labels and label pairs) simultaneously. Both approaches are validated using well-known multi-label data sets as well as large image data sets currently used in computer vision and machine learning applications. EvoSplit improves the splitting of a data set in comparison to iterative stratification according to different measures: Label Distribution, Label Pair Distribution, Examples Distribution, and folds and fold-label pairs with zero positive examples.
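
    The single-objective idea described above can be illustrated with a minimal sketch. This is not EvoSplit itself: it is a simple (1+1)-style evolutionary loop over fold assignments, with an assumed fitness that penalises the deviation between each fold's label distribution and that of the full data set.

```python
# Minimal sketch of a single-objective evolutionary split, assuming a
# binary label matrix Y (rows = examples, columns = labels). The fitness
# and names are illustrative, not EvoSplit's exact formulation.
import random

def label_dist(Y, idx):
    # Fraction of positive examples per label within the subset idx.
    n = max(len(idx), 1)
    return [sum(Y[i][l] for i in idx) / n for l in range(len(Y[0]))]

def fitness(Y, assign, k):
    # Negative total deviation between each fold's label distribution
    # and the full data set's distribution (higher is better, max 0).
    full = label_dist(Y, range(len(Y)))
    folds = [[i for i, f in enumerate(assign) if f == j] for j in range(k)]
    dev = sum(abs(a - b) for fold in folds
              for a, b in zip(label_dist(Y, fold), full))
    return -dev

def evolve_split(Y, k=2, iters=500, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(k) for _ in Y]   # random initial split
    best = fitness(Y, assign, k)
    for _ in range(iters):
        cand = assign[:]
        cand[rng.randrange(len(cand))] = rng.randrange(k)  # mutate one example
        f = fitness(Y, cand, k)
        if f >= best:                                      # keep non-worse child
            assign, best = cand, f
    return assign, best
```

    The multi-objective variant would score labels and label pairs as two separate objectives instead of a single scalar.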

    Profile hidden Markov models for foreground object modelling

    Accurate background/foreground segmentation is a preliminary process essential to most visual surveillance applications. With the increasing use of freely moving cameras, strategies have been proposed to refine initial segmentation. In this paper, it is proposed to exploit the Vide-omics paradigm, and Profile Hidden Markov Models in particular, to create a new type of object descriptor relying on spatiotemporal information. Performance of the proposed methodology has been evaluated using a standard dataset of videos captured by moving cameras. Results show that use of the proposed object descriptors allows better foreground extraction than standard approaches.

    A Low-Dimensional Radial Silhouette-Based Feature for Fast Human Action Recognition Fusing Multiple Views

    This paper presents a novel silhouette-based feature for vision-based human action recognition, which relies on the contour of the silhouette and a radial scheme. Its low dimensionality and ease of extraction make it outstandingly well suited to real-time scenarios. This feature is used in a learning algorithm that, by means of model fusion of multiple camera streams, builds a bag of key poses, which serves as a dictionary of known poses and allows converting the training sequences into sequences of key poses. These are used to perform action recognition by means of a sequence matching algorithm. Experimentation on three different datasets returns high and stable recognition rates. To the best of our knowledge, this paper presents the highest results so far on the MuHAVi-MAS dataset. The method is suitable for real-time use, since it easily performs above video frame rate. Therefore, the related requirements that applications such as ambient-assisted living services impose are successfully fulfilled.
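
    A radial silhouette feature of this kind can be sketched as follows. This is an illustrative reconstruction, not the paper's exact design: contour points are binned by angle around the silhouette centroid and summarised by their mean radial distance, then normalised for scale invariance; the bin count and summary statistic are assumptions.

```python
# Illustrative radial silhouette feature: angular binning of contour
# points around the centroid, normalised mean radius per bin.
import math

def radial_feature(contour, bins=16):
    # Centroid of the contour points.
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    sums, counts = [0.0] * bins, [0] * bins
    for x, y in contour:
        r = math.hypot(x - cx, y - cy)                # radial distance
        a = math.atan2(y - cy, x - cx) % (2 * math.pi)  # angle in [0, 2*pi)
        b = min(int(a / (2 * math.pi) * bins), bins - 1)
        sums[b] += r
        counts[b] += 1
    feat = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    m = max(feat) or 1.0
    return [v / m for v in feat]  # scale-invariant radial profile
```

    The resulting vector has one value per angular sector, which keeps the dimensionality low regardless of silhouette resolution.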

    Automatic parameter tuning for functional regionalization methods

    The methods used to define functional regions for public statistics and policy purposes need to establish several parameter values. This is typically achieved using expert knowledge based on qualitative judgements and lengthy consultations with local stakeholders. We propose to support this process by using an optimization algorithm to calibrate any regionalization method, identifying the parameter values that produce the best regionalization for a given quantitative indicator. The approach is exemplified by using a grid search and a genetic algorithm to configure the official methods employed in the UK and Sweden for the definition of their respective official concepts of local labour markets. This work was supported by the Spanish Ministry of Economy and Competitiveness (grant numbers CSO2011-29943-C03-02 and CSO2014-55780-C3-2-P, National R&D&i Plan).
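
    The grid-search side of the calibration can be sketched generically. Here `regionalize` and `indicator` are hypothetical stand-ins for the regionalization method being tuned and the quantitative quality indicator; every parameter combination is scored and the best one returned.

```python
# Minimal grid-search calibration sketch: exhaustively score every
# parameter combination with a quantitative indicator.
from itertools import product

def grid_search(regionalize, indicator, grid):
    best_params, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = indicator(regionalize(**params))  # quality of this regionalization
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

    A genetic algorithm replaces the exhaustive loop with selection, crossover and mutation over candidate parameter vectors, which scales better when the grid is large.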

    Continuous human action recognition in ambient assisted living scenarios

    Ambient assisted living technologies and services make it possible to help elderly and impaired people and increase their personal autonomy. Specifically, vision-based approaches enable the recognition of human behaviour, which in turn allows valuable services to be built upon it. However, a main constraint is that these approaches have to work online and in real time. In this work, a human action recognition method based on a bag-of-key-poses model and sequence alignment is extended to support continuous human action recognition. The detection of action zones is proposed to locate the most discriminative segments of an action. For the recognition, a method based on a sliding and growing window approach is presented. Furthermore, an evaluation scheme particularly designed for ambient assisted living scenarios is introduced. Experimental results on two publicly available datasets are provided. These show that the proposed action zones lead to a significant improvement and allow real-time processing.
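
    The sliding-and-growing window idea can be sketched in a few lines. This is an assumed simplification, not the paper's exact procedure: the window grows frame by frame until the classifier is confident enough, an action is then emitted, and the window restarts after the detected segment. The `classify` interface stands in for the bag-of-key-poses matcher.

```python
# Sketch of continuous recognition with a sliding and growing window.
# classify(window) -> (label, confidence) is a hypothetical interface.
def continuous_recognition(frames, classify, threshold=0.8, max_len=60):
    detections, start = [], 0
    i = 0
    while i < len(frames):
        label, conf = classify(frames[start:i + 1])  # grow the window
        if conf >= threshold or i - start + 1 >= max_len:
            if conf >= threshold:
                detections.append((start, i, label))  # emit detected segment
            start = i + 1                             # slide past the segment
        i += 1
    return detections
```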

    From data acquisition to data fusion: a comprehensive review and a roadmap for the identification of activities of daily living using mobile devices

    This paper reviews the state of the art in sensor fusion techniques applied to the sensors embedded in mobile devices, as a means to help identify the mobile device user’s daily activities. Sensor data fusion techniques are used to consolidate the data collected from several sensors, increasing the reliability of the algorithms for the identification of the different activities. However, mobile devices have several constraints, e.g., low memory, low battery life and low processing power, and some data fusion techniques are not suited to this scenario. The main purpose of this paper is to present an overview of the state of the art to identify examples of sensor data fusion techniques that can be applied to the sensors available in mobile devices aiming to identify activities of daily living (ADLs).
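
    As one example of a fusion technique cheap enough for the constraints above, a complementary filter blends two sensor estimates with fixed weights. This is a generic illustration in the spirit of the review, not a technique the paper singles out; the weight value is an assumption.

```python
# Lightweight sensor fusion sketch: a complementary filter blending two
# per-sample estimate streams, suitable for low-memory, low-power devices.
def complementary_filter(primary, secondary, alpha=0.9):
    # alpha weighs the primary (e.g. gyroscope-integrated) estimate against
    # the secondary (e.g. accelerometer-derived) one, sample by sample.
    return [alpha * p + (1 - alpha) * s for p, s in zip(primary, secondary)]
```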

    Performance analysis of self-organising neural network tracking algorithms for intake monitoring using Kinect

    The analysis of intake behaviour is a key factor to understand the health condition of a subject, such as elderly people or people affected by diet-related disorders. Technology can be exploited for this purpose to promptly identify anomalous situations. This paper presents a comparison between three unsupervised machine learning algorithms used to track the movements performed by a person during an intake action, and provides experimental results showing the best performing algorithm among those compared.
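
    The core update step shared by self-organising tracking algorithms can be sketched as follows. This is a generic illustration, not one of the three compared algorithms: the node closest to an observed point is pulled towards it, and its chain-topology neighbours less so; the learning rates are assumptions.

```python
# One self-organising map (SOM) update step for 2-D point tracking.
import math

def som_update(nodes, point, lr=0.5, neighbour_lr=0.25):
    # Find the best-matching unit (closest node to the observation).
    bmu = min(range(len(nodes)),
              key=lambda i: math.dist(nodes[i], point))
    for i, (x, y) in enumerate(nodes):
        if i == bmu:
            rate = lr
        elif abs(i - bmu) == 1:          # chain-topology neighbours
            rate = neighbour_lr
        else:
            continue
        # Pull the node towards the observed point.
        nodes[i] = (x + rate * (point[0] - x), y + rate * (point[1] - y))
    return bmu
```

    Repeating this step over a stream of observed hand/mouth positions makes the node chain follow the movement over time.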

    Automated detection of hands and objects in egocentric videos for ambient assisted living applications

    The need for technology assisted (or ambient assisted) living is increasing all the time as the population ages and the number of people with dementia and other conditions impairing memory and cognitive ability increases. In such applications, amongst others, it is necessary to identify and assess potentially hazardous situations. These include scenarios involving a person’s hands and their interactions with various objects. In this paper, we describe our novel approach to identify human hands and objects in videos of people performing a variety of everyday tasks. We compare the performance of our method using different strategies with that of other state of the art approaches. We conclude that, when the proposed approach takes advantage of a pre-trained model, hand detection is performed accurately (94%), providing reliable information for assisted living applications.

    Evolutionary joint selection to improve human action recognition with RGB-D devices

    Interest in RGB-D devices is increasing due to their low price and the wide range of possible applications that come with them. These devices provide marker-less body pose estimation by means of skeletal data consisting of the 3D positions of body joints, which can be further used for pose, gesture or action recognition. In this work, an evolutionary algorithm is used to determine the optimal subset of skeleton joints, taking into account the topological structure of the skeleton, in order to improve the final success rate. The proposed method has been validated using a state-of-the-art RGB-D action recognition approach, applying it to the MSR-Action3D dataset. Results show that the proposed algorithm is able to significantly improve the initial recognition rate and to yield similar or better success rates than state-of-the-art methods. This work has been partially supported by the European Commission under project “caring4U – A study on people activity in private spaces: towards a multisensor network that meets privacy requirements” (PIEF-GA-2010-274649) and by the Spanish Ministry of Science and Innovation under project “Sistema de visión para la monitorización de la actividad de la vida diaria en el hogar” (TIN2010-20510-C04-02). Alexandros Andre Chaaraoui and José Ramón Padilla-López acknowledge financial support from the Conselleria d’Educació, Formació i Ocupació of the Generalitat Valenciana (fellowships ACIF/2011/160 and ACIF/2012/064, respectively).
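
    Evolutionary joint selection of this kind can be sketched as a simple search over a bitmask of joints. This is an assumed simplification of the idea, not the paper's algorithm: `evaluate` is a hypothetical stand-in for running the full recognition pipeline with a given joint subset, and the mutation scheme ignores the skeleton topology the paper exploits.

```python
# Sketch of evolutionary joint selection over a binary joint mask.
import random

def select_joints(n_joints, evaluate, iters=200, seed=0):
    rng = random.Random(seed)
    mask = [1] * n_joints                   # start from the full skeleton
    best = evaluate(mask)
    for _ in range(iters):
        cand = mask[:]
        j = rng.randrange(n_joints)
        cand[j] = 1 - cand[j]               # flip one joint in or out
        if any(cand):                       # keep at least one joint
            score = evaluate(cand)
            if score >= best:               # keep non-worse candidates
                mask, best = cand, score
    return mask, best
```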

    Recognition of Activities of Daily Living with Egocentric Vision: A Review.

    Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory.